New Sequential and Parallel Derivative-Free Algorithms for Unconstrained Minimization

Authors

  • Ubaldo M. García-Palomares
  • J. F. Rodríguez
Abstract

This paper presents sequential and parallel derivative-free algorithms for finding a local minimum of smooth and nonsmooth functions of practical interest. It is proved that, under mild assumptions, a sufficient decrease condition holds for a nonsmooth function. Based on this property, the algorithms explore a set of search directions and move to a point with a sufficiently lower function value. If the function is strictly differentiable at its limit points, a (sub)sequence of points generated by the algorithm converges to a first-order stationary point (∇f(x) = 0). If the function is convex around its limit points, convergence of a subsequence to a point with nonnegative directional derivatives on a set of search directions is ensured. Preliminary numerical results show that the sequential algorithms compare favorably with recently introduced pattern search methods.
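The loop the abstract describes lends itself to a compact sketch: probe a set of search directions and accept the first trial point that passes a sufficient decrease test. The Python sketch below is an illustration only, not the authors' algorithm; the positive spanning set of signed coordinate directions, the forcing term c*t^2 in the test f(x + t*d) <= f(x) - c*t^2, and the step expansion/contraction factors are common textbook choices assumed here for concreteness.

    import numpy as np

    def direct_search(f, x, step=1.0, c=1e-4, tol=1e-8, max_iter=1000):
        """Derivative-free direct search with a sufficient decrease test
        (assumed forcing term rho(t) = c * t**2)."""
        n = len(x)
        # Positive spanning set: the signed coordinate directions.
        directions = np.vstack([np.eye(n), -np.eye(n)])
        fx = f(x)
        for _ in range(max_iter):
            if step < tol:
                break
            for d in directions:
                y = x + step * d
                fy = f(y)
                if fy <= fx - c * step**2:  # sufficient decrease achieved
                    x, fx = y, fy
                    step *= 2.0             # expand the step after a success
                    break
            else:
                step *= 0.5                 # every direction failed: contract
        return x, fx

    # Example: a smooth quadratic with minimizer (1, 1, 1).
    xmin, fmin = direct_search(lambda z: np.sum((z - 1.0) ** 2), np.zeros(3))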

Related articles

Parallel Synchronous and Asynchronous Space-Decomposition Algorithms for Large-Scale Minimization Problems

Three parallel space-decomposition minimization (PSDM) algorithms, based on the parallel variable transformation (PVT) and the parallel gradient distribution (PGD) algorithms (O.L. Mangasarian, SIAM Journal on Control and Optimization, vol. 33, no. 6, pp. 1916–1925), are presented for solving convex or nonconvex unconstrained minimization problems. The PSDM algorithms decompose the variable sp...
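A minimal sketch of the synchronous decomposition idea, under simplifying assumptions: the variables are split into disjoint blocks, each block subproblem is solved concurrently with the other variables frozen, and the block results are merged. The crude coordinate search used as the subproblem solver, the thread-based parallelism (a real code would distribute across processes or MPI ranks), and the plain merge in place of PVT/PGD's synchronization step are all placeholders, not the PSDM methods themselves.

    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def _block_search(f, x, idx, step=0.25, iters=50):
        """Crudely minimize f over the variables listed in idx, all other
        variables held fixed (stand-in for a real subproblem solver)."""
        x = x.copy()
        for _ in range(iters):
            improved = False
            for i in idx:
                for s in (step, -step):
                    y = x.copy()
                    y[i] += s
                    if f(y) < f(x):
                        x, improved = y, True
            if not improved:
                step *= 0.5
        return idx, x[idx]

    def psd_sweep(f, x, blocks):
        """One synchronous sweep: solve the block subproblems in parallel,
        then merge the block results into a single iterate."""
        with ThreadPoolExecutor() as pool:
            futures = [pool.submit(_block_search, f, x, b) for b in blocks]
            results = [fut.result() for fut in futures]
        x_new = x.copy()
        for idx, xb in results:
            x_new[idx] = xb
        return x_new

    # Example: four variables split into two blocks.
    x1 = psd_sweep(lambda z: np.sum((z - 2.0) ** 2), np.zeros(4), [[0, 1], [2, 3]])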

A Globally Convergent Primal-dual Interior-point Filter Method for Nonlinear Programming: New Filter Optimality Measures and Computational Results

Global Convergence of General Derivative-Free Trust-Region Algorithms to First- and Second-Order Critical Points

In this paper we prove global convergence to first- and second-order stationary points for a class of derivative-free trust-region methods for unconstrained optimization. These methods are based on the sequential minimization of quadratic (or linear) models built from evaluating the objective function at sample sets. The derivative-free models are required to satisfy Taylor-type bounds but, apar...
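For a rough picture of the model-based step this blurb refers to, the sketch below fits a linear model m(s) = c + g.s to a sample set by least squares and minimizes it over the trust region, whose exact minimizer is a step of length radius along -g. The random sampling and the absence of any geometry control are simplifying assumptions; controlling sample geometry is precisely what the Taylor-type bounds mentioned above regulate in the actual methods.

    import numpy as np

    def linear_model_tr_step(f, x, radius, rng=None):
        """Fit m(s) = c + g.s to samples of f around x (least squares) and
        minimize m over ||s|| <= radius; the minimizer is -radius * g/||g||."""
        rng = np.random.default_rng(rng)
        n = len(x)
        S = radius * rng.uniform(-1.0, 1.0, size=(2 * n, n))  # sample steps
        F = np.array([f(x + s) for s in S])
        A = np.hstack([np.ones((2 * n, 1)), S])
        coef, *_ = np.linalg.lstsq(A, F, rcond=None)
        g = coef[1:]
        gnorm = np.linalg.norm(g)
        return x if gnorm == 0 else x - radius * g / gnorm

    # Example: one trust-region step on a smooth function.
    x_next = linear_model_tr_step(lambda z: np.sum(z ** 2) + z[0], np.ones(3), 0.5)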

On the Oracle Complexity of First-Order and Derivative-Free Algorithms for Smooth Nonconvex Minimization

The (optimal) worst-case complexity analysis, in terms of function/gradient evaluations, available for the Adaptive Regularization with Cubics (ARC) algorithm for nonconvex smooth unconstrained optimization is extended to finite-difference versions of this algorithm, yielding complexity bounds for first-order and derivative-free methods applied to the same problem class. A comparison with the results obtai...
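The finite-difference construction behind such bounds is simple to state: the gradient required by the first-order method is replaced by forward differences, at a cost of n + 1 function evaluations per gradient. The step size below, the square root of machine precision, is a standard heuristic assumed here, not a choice taken from the paper.

    import numpy as np

    def fd_gradient(f, x, h=None):
        """Forward-difference gradient estimate costing n + 1 evaluations;
        h ~ sqrt(machine eps) balances truncation and rounding error."""
        h = h or float(np.sqrt(np.finfo(float).eps))
        fx = f(x)
        g = np.empty(len(x))
        for i in range(len(x)):
            e = np.zeros(len(x))
            e[i] = h
            g[i] = (f(x + e) - fx) / h
        return g

    # Example: gradient of a quadratic at the ones vector.
    g = fd_gradient(lambda z: np.sum(z ** 2), np.ones(4))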

An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization

We consider an unconstrained problem of minimizing a smooth convex function that is available only through noisy observations of its values, where the noise consists of two parts. As in stochastic optimization problems, the first part is of a stochastic nature. In contrast, the second part is additive noise of an unknown nature, but bounded in absolute value. In the two-point ...
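Two-point feedback of this kind is usually realized as a randomized directional-difference estimator: evaluate f at x + tau*u and x - tau*u for a random direction u and scale the difference into a gradient estimate. The Gaussian-smoothing variant below is one standard form, assumed for illustration rather than taken from the paper.

    import numpy as np

    def two_point_grad(f, x, tau=1e-4, rng=None):
        """Two-point gradient estimate via Gaussian smoothing:
        g = (f(x + tau*u) - f(x - tau*u)) / (2 * tau) * u,  u ~ N(0, I).
        Unbiased for the gradient of the smoothed function E[f(x + tau*u)]."""
        rng = np.random.default_rng(rng)
        u = rng.standard_normal(len(x))
        return (f(x + tau * u) - f(x - tau * u)) / (2.0 * tau) * u

    # Example: two-point estimate for a smooth convex function.
    g = two_point_grad(lambda z: np.sum(z ** 2), np.ones(5))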

Journal:
  • SIAM Journal on Optimization

Volume 13, Issue -

Pages -

Publication date 2002